We learned about conditional independence as a way of modeling probabilistic facts about the world
and moved towards our main reasoning tool, namely Bayesian networks, which allow us to
actually take the step from math to modeling.
So the basic idea was that we build up these networks,
networks where the nodes are random variables,
and the arrows are dependencies.
And more importantly, non-arrows mean conditional independence.
And this is the prototypical topology that we're interested in.
We have two conditionally independent things given their parents.
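To make this precise for the common-cause case, with Z the parent and X, Y its children (generic placeholder symbols, not notation from the clip):

```latex
% X and Y are conditionally independent given their common parent Z:
P(X, Y \mid Z) = P(X \mid Z)\, P(Y \mid Z)
```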
So that's the idea, and if that's the case,
then we can start the mill grinding:
normalization and marginalization.
We're actually interested in situations
where we have certain evidence.
The agent has seen or perceived something,
and we want to compute certain probability distributions
given that evidence.
So we do the normalization and marginalization trick
to get at these kinds of probabilities,
and then we compute these with the chain rule.
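Spelled out (generic notation assumed here: X the query variable, e the evidence, Y the remaining hidden variables):

```latex
% Normalization (alpha = 1/P(e)) and marginalization over the hidden Y:
P(X \mid \mathbf{e}) = \alpha\, P(X, \mathbf{e})
                     = \alpha \sum_{\mathbf{y}} P(X, \mathbf{e}, \mathbf{y})
% Each joint entry is then expanded with the chain rule:
P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid x_1, \dots, x_{i-1})
```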
And here is where the Bayesian network kicks in.
Basically, in these conditional probabilities,
we can drop lots of stuff, namely everything
except the parents of a node.
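In symbols, ordering the variables so that parents come before their children (the standard textbook form of this step):

```latex
% Conditional independence shrinks each chain-rule factor to the parents:
P(x_i \mid x_1, \dots, x_{i-1}) = P(x_i \mid \mathrm{Parents}(X_i))
% so the full joint factorizes over the network:
P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid \mathrm{Parents}(X_i))
```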
And that's going to be where the good stuff happens, namely
things get less complex.
That's where we exploit conditional independence.
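Here is a minimal runnable sketch of the whole pipeline on the textbook Cavity network (one parent, two conditionally independent children); the names and numbers are illustrative, not from the lecture:

```python
# Toy network: Cavity -> {Toothache, Catch}.
# Toothache and Catch are conditionally independent given Cavity.

# CPTs (illustrative numbers): P(Cavity), P(Toothache=True | Cavity),
# P(Catch=True | Cavity), each keyed by the value of Cavity.
p_cavity = {True: 0.2, False: 0.8}
p_toothache = {True: 0.6, False: 0.1}
p_catch = {True: 0.9, False: 0.2}

def joint(cavity, toothache, catch):
    """Chain rule with conditional independence exploited:
    every factor conditions only on the node's parents."""
    pt = p_toothache[cavity] if toothache else 1 - p_toothache[cavity]
    pc = p_catch[cavity] if catch else 1 - p_catch[cavity]
    return p_cavity[cavity] * pt * pc

def query_cavity(toothache):
    """P(Cavity | toothache): marginalize out Catch, then normalize."""
    unnormalized = {
        c: sum(joint(c, toothache, catch) for catch in (True, False))
        for c in (True, False)
    }
    alpha = 1 / sum(unnormalized.values())
    return {c: alpha * p for c, p in unnormalized.items()}

print(query_cavity(toothache=True))
# -> {True: 0.6..., False: 0.4...}: the evidence raises P(cavity)
```

The hidden variable Catch sums out to one here, but the same joint() would serve unchanged if Catch were observed as evidence instead.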
We've talked about applications.
And we finished up this chapter.
Recap: Introduction
The main video on this topic is chapter 4, clip 1.